
Social Media for Mental Health: Data, Methods, and Findings

Kamarudin, Nur Shazwani, Beigi, Ghazaleh, Manikonda, Lydia, Liu, Huan

arXiv.org Artificial Intelligence

There is an increasing number of virtual communities and forums on the web. Through social media, people can freely communicate, share their thoughts, ask personal questions, and seek peer support without revealing their identity, which is especially valuable for those with highly stigmatized conditions. We survey state-of-the-art research methodologies and findings on mental health challenges such as depression, anxiety, and suicidal ideation drawn from the pervasive use of social media data, and discuss how these new approaches can help raise awareness of mental health issues in unprecedented ways. Specifically, this chapter describes linguistic, visual, and emotional indicators expressed in user disclosures. Its main goal is to show how this new source of data can be tapped to improve medical practice, provide timely support, and inform government and policymakers. In the context of social media for mental health issues, this chapter categorizes the social media data used, introduces the machine learning, feature engineering, natural language processing, and survey methods deployed, and outlines directions for future research.
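One family of linguistic indicators the chapter refers to can be illustrated with a minimal sketch (not taken from the chapter itself): studies in this area often count first-person singular pronouns and negative-emotion words, both frequently reported as signals in depression-related posts. The lexicons below are tiny illustrative stand-ins for real resources such as LIWC.

```python
import re

# Illustrative mini-lexicons (real studies use validated resources like LIWC).
NEG_LEXICON = {"sad", "hopeless", "tired", "alone", "worthless", "anxious"}
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}

def linguistic_features(post: str) -> dict:
    """Extract simple pronoun- and emotion-based ratios from one post."""
    tokens = re.findall(r"[a-z']+", post.lower())
    n = max(len(tokens), 1)  # avoid division by zero on empty posts
    return {
        "first_person_ratio": sum(t in FIRST_PERSON for t in tokens) / n,
        "neg_emotion_ratio": sum(t in NEG_LEXICON for t in tokens) / n,
        "token_count": len(tokens),
    }

feats = linguistic_features("I feel so alone and tired, nobody gets me.")
```

In practice such ratios would be computed per user over many posts and fed into the machine learning models the chapter surveys.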


More than a million people every week show suicidal intent when chatting with ChatGPT, OpenAI estimates

The Guardian

More than a million ChatGPT users each week send messages that include "explicit indicators of potential suicidal planning or intent", according to a blogpost published by OpenAI on Monday. The finding, part of an update on how the chatbot handles sensitive conversations, is one of the company's most direct statements on the scale at which AI can exacerbate mental health issues. OpenAI claimed that its recent GPT-5 update improved user safety in a model evaluation involving more than 1,000 self-harm and suicide conversations. In addition to its estimates on suicidal ideation and related interactions, OpenAI also said that about 0.07% of users active in a given week - about 560,000 of its touted 800m weekly users - show "possible signs of mental health emergencies related to psychosis or mania".


People Who Say They're Experiencing AI Psychosis Beg the FTC for Help

WIRED

The Federal Trade Commission received 200 complaints mentioning ChatGPT between November 2022 and August 2025; several attributed delusions, paranoia, and spiritual crises to the chatbot. On March 13, a woman from Salt Lake City, Utah, called the FTC to file a complaint against OpenAI's ChatGPT. She claimed to be acting "on behalf of her son, who was experiencing a delusional breakdown." "The consumer's son has been interacting with an AI chatbot called ChatGPT, which is advising him not to take his prescribed medication and telling him that his parents are dangerous," reads the FTC's summary of the call.


The key health bills California Gov. Newsom signed this week focused on how technology is impacting kids

Los Angeles Times

California Gov. Gavin Newsom spoke at Belvedere Middle School in Los Angeles on Oct. 8, 2025, about a week before signing legislation to improve nutrition in schools across the state. Newsom also signed several bills to regulate AI in California, especially for children.


MHINDR -- a DSM5 based mental health diagnosis and recommendation framework using LLM

Agarwal, Vaishali, Thukral, Sachin, Chatterjee, Arnab

arXiv.org Artificial Intelligence

Mental health forums offer valuable insights into psychological issues, stressors, and potential solutions. We propose MHINDR, a large language model (LLM) based framework integrated with DSM-5 criteria to analyze user-generated text, diagnose mental health conditions, and generate personalized interventions and insights for mental health practitioners. Our approach emphasizes the extraction of temporal information for accurate diagnosis and symptom-progression tracking, together with psychological features, to create comprehensive mental health summaries of users. The framework delivers scalable, customizable, and data-driven therapeutic recommendations, adaptable to diverse clinical contexts, patient needs, and workplace well-being programs.
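The abstract's pairing of DSM-5 criteria with temporal information can be sketched as follows (this is entirely hypothetical, not MHINDR's actual code): once an LLM has extracted symptom mentions and when they occurred from forum posts, a downstream rule can check a DSM-5-style threshold, here a major depressive episode screen requiring at least five distinct symptoms persisting over at least 14 days.

```python
from dataclasses import dataclass

@dataclass
class SymptomMention:
    """One LLM-extracted symptom mention with its temporal anchor."""
    symptom: str
    day: int  # days since the user's first post

def meets_mde_screen(mentions: list[SymptomMention]) -> bool:
    """DSM-5-style screen: >= 5 distinct symptoms spanning >= 14 days."""
    if not mentions:
        return False
    span = max(m.day for m in mentions) - min(m.day for m in mentions)
    distinct = {m.symptom for m in mentions}
    return len(distinct) >= 5 and span >= 14

mentions = [
    SymptomMention("depressed mood", 0),
    SymptomMention("insomnia", 3),
    SymptomMention("fatigue", 7),
    SymptomMention("worthlessness", 10),
    SymptomMention("poor concentration", 16),
]
```

The hard part, of course, is the extraction step the rule consumes; the point here is only that the temporal anchors make duration criteria checkable at all.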


M-HELP: Using Social Media Data to Detect Mental Health Help-Seeking Signals

Sathvik, MSVPJ, Shaik, Zuhair Hasan, Gupta, Vivek

arXiv.org Artificial Intelligence

Mental health disorders are a global crisis. While various datasets exist for detecting such disorders, there remains a critical gap in identifying individuals actively seeking help. This paper introduces a novel dataset, M-Help, specifically designed to detect help-seeking behavior on social media. The dataset goes beyond traditional labels by identifying not only help-seeking activity but also specific mental health disorders and their underlying causes, such as relationship challenges or financial stressors. AI models trained on M-Help can address three key tasks: identifying help-seekers, diagnosing mental health conditions, and uncovering the root causes of issues.
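The three tasks the abstract lists imply a record layout with three labels per post; the field names below are illustrative, not the authors'. A toy keyword baseline for the first task (help-seeking detection) shows the shape of the problem, though the paper's models would of course be learned rather than rule-based.

```python
# Hypothetical record layout for a dataset like M-Help: one post,
# three labels, matching the three tasks in the abstract.
record = {
    "post": "Can anyone recommend a therapist? Money problems are crushing me.",
    "help_seeking": True,           # task 1: is the user seeking help?
    "disorder": "anxiety",          # task 2: mental health condition
    "cause": "financial stressor",  # task 3: underlying cause
}

# Toy rule-based baseline for task 1 only: flag explicit requests for help.
HELP_CUES = ("anyone recommend", "where can i", "need help", "please help")

def is_help_seeking(post: str) -> bool:
    text = post.lower()
    return any(cue in text for cue in HELP_CUES)
```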


Is your therapist AI? ChatGPT goes viral on social media for its role as Gen Z's new therapist

FOX News

AI chatbots are stepping into the therapist's chair – and not everyone is thrilled about it. In March alone, 16.7 million posts from TikTok users discussed using ChatGPT as a therapist, but mental health professionals are raising red flags over the growing trend that sees artificial intelligence tools being used in their place to treat anxiety, depression and other mental health challenges. "ChatGPT singlehandedly has made me a less anxious person when it comes to dating, when it comes to health, when it comes to career," user @christinazozulya shared in a TikTok video posted to her profile last month. "Any time I have anxiety, instead of bombarding my parents with texts like I used to or texting a friend or crashing out essentially… before doing that, I always voice memo my thoughts into ChatGPT, and it does a really good job at calming me down and providing me with that immediate relief that unfortunately isn't as accessible to everyone."


Robotic dog helps those facing mental health and cognitive challenges

FOX News

Jennie the artificial intelligence-powered robotic dog is designed to provide comfort and companionship to those with mental health challenges. U.S. robotics company Tombot has introduced Jennie, an innovative AI-powered robotic pet designed to provide comfort and companionship to those facing cognitive health challenges. This groundbreaking creation is set to transform the lives of millions struggling with dementia, mild cognitive impairment and various mental health issues. Jennie's inception stems from a personal tragedy experienced by Tombot CEO Tom Stevens. When his mother, Nancy, was diagnosed with Alzheimer's, the family had to make the heart-wrenching decision to rehome her beloved dog, Golden Bear.


SouLLMate: An Application Enhancing Diverse Mental Health Support with Adaptive LLMs, Prompt Engineering, and RAG Techniques

Guo, Qiming, Tang, Jinwen, Sun, Wenbo, Tang, Haoteng, Shang, Yi, Wang, Wenlu

arXiv.org Artificial Intelligence

Mental health issues significantly impact individuals' daily lives, yet many do not receive the help they need even with available online resources. This study aims to provide diverse, accessible, stigma-free, personalized, and real-time mental health support through cutting-edge AI technologies. It makes the following contributions: (1) Conducting an extensive survey of recent mental health support methods to identify prevalent functionalities and unmet needs. (2) Introducing SouLLMate, an adaptive LLM-driven system that integrates LLM technologies, LangChain, Retrieval-Augmented Generation (RAG), prompt engineering, and domain knowledge. This system offers advanced features such as Risk Detection and Proactive Guidance Dialogue, and utilizes RAG for personalized profile uploads and Conversational Information Extraction. (3) Developing novel evaluation approaches for preliminary assessments and risk detection via professionally annotated interview data and real-life suicide tendency data. (4) Proposing the Key Indicator Summarization (KIS), Proactive Questioning Strategy (PQS), and Stacked Multi-Model Reasoning (SMMR) methods to enhance model performance and usability through context-sensitive response adjustments, semantic coherence evaluations, and enhanced accuracy of long-context reasoning in language models. This study contributes to advancing mental health support technologies, potentially improving the accessibility and effectiveness of mental health care globally.
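The RAG component the abstract describes can be sketched minimally (this is entirely hypothetical; SouLLMate's real system retrieves over user-uploaded profiles and domain knowledge and calls an LLM). Here, retrieval is plain token-overlap scoring over a toy knowledge base, and the "LLM call" is reduced to assembling the grounded prompt.

```python
import re

# Toy knowledge base standing in for domain documents / user profiles.
KB = [
    "Grounding exercises can reduce acute anxiety.",
    "Sleep hygiene: regular schedule, no screens before bed.",
    "If you have thoughts of self-harm, contact a crisis line immediately.",
]

def _toks(s: str) -> set[str]:
    return set(re.findall(r"[a-z]+", s.lower()))

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank KB documents by token overlap with the query (toy retriever)."""
    q = _toks(query)
    scored = sorted(KB, key=lambda d: -len(q & _toks(d)))
    return scored[:k]

def build_prompt(query: str) -> str:
    """Assemble the retrieval-grounded prompt an LLM would receive."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nUser: {query}"

prompt = build_prompt("I keep having anxiety at night")
```

A production system would swap the overlap scorer for embedding similarity and send the prompt to an actual model; the structure of retrieve-then-generate is the same.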


MindGuard: Towards Accessible and Stigma-free Mental Health First Aid via Edge LLM

Ji, Sijie, Zheng, Xinzhe, Sun, Jiawei, Chen, Renqi, Gao, Wei, Srivastava, Mani

arXiv.org Artificial Intelligence

Mental health disorders are among the most prevalent diseases worldwide, affecting nearly one in four people. Despite their widespread impact, the intervention rate remains below 25%, largely due to the significant cooperation required from patients for both diagnosis and intervention. The core issue behind this low treatment rate is stigma, which discourages over half of those affected from seeking help. This paper presents MindGuard, an accessible, stigma-free, and professional mobile mental healthcare system designed to provide mental health first aid. The heart of MindGuard is an innovative edge LLM, equipped with professional mental health knowledge, that seamlessly integrates objective mobile sensor data with subjective Ecological Momentary Assessment records to deliver personalized screening and intervention conversations. We conduct a broad evaluation of MindGuard using open datasets spanning four years and real-world deployment across various mobile devices involving 20 subjects for two weeks. Remarkably, MindGuard achieves results comparable to GPT-4 and outperforms counterpart models more than 10 times its size. We believe that MindGuard paves the way for mobile LLM applications, potentially revolutionizing mental healthcare practices by substituting self-reporting and intervention conversations with passive, integrated monitoring within daily life, thus ensuring accessible and stigma-free mental health support.
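The fusion of passive sensing with Ecological Momentary Assessment that the abstract describes can be illustrated with a deliberately simple sketch (not MindGuard's algorithm, whose fusion happens inside an edge LLM): combine a passive signal such as daily step counts with subjective EMA mood ratings into a screening flag. Thresholds here are arbitrary placeholders.

```python
def screen(steps: list[int], mood: list[int],
           step_floor: int = 2000, mood_ceiling: float = 2.5) -> bool:
    """Flag when passive activity AND self-reported mood (1-5) are both low.

    `step_floor` and `mood_ceiling` are illustrative thresholds, not
    clinically validated cutoffs.
    """
    low_activity = sum(steps) / len(steps) < step_floor
    low_mood = sum(mood) / len(mood) < mood_ceiling
    return low_activity and low_mood

flagged = screen(steps=[800, 1200, 600], mood=[2, 1, 2])
```

Requiring both signals to agree is one crude way to let objective and subjective data check each other, which is the design intuition behind pairing sensors with EMA in the first place.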